
    Does the Outcome of the US-China Trade War Meet the Purpose?

    As the two largest economies in the world, the US and China have seen their economic relations expand substantially over the past decades. China is the US's second-largest merchandise trading partner, third-largest export market, and most significant source of imports (Li, He & Lin, 2018). During his presidency, Trump advocated tariffs to reduce the US trade deficit and to promote domestic manufacturing. Our interest is in whether the Trade War reduced the US's trade deficit with China and whether it was necessary. Has this topic been studied before? Yes and no. We propose a fresh approach: the treatment-effect analysis put forward by Hsiao, Ching, and Wan, which we prefer because it is regarded as one of the simplest and most accurate methods in the treatment-effect estimation literature. We run the corresponding regression and predict the counterfactual trade deficit between the US and China as if the Trade War had not occurred, then compute the average difference between the actual values and the predicted ones. We conclude that, rather than reducing it, the Trade War increased the US's trade deficit with China by 1.5 million. We hope that this method, applied to the latest data, offers new insights into the Trade War for those interested in this area and inspires future research in the field.
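
    The counterfactual construction described above can be sketched in a few lines. The sketch below assumes a pandas DataFrame `df` whose columns hold the US-China trade-deficit series and the corresponding series of untreated donor economies; all names, and the choice of plain OLS via scikit-learn, are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a Hsiao-Ching-Wan style counterfactual estimate, assuming a
# DataFrame `df` indexed by month, with the treated series in column `outcome` and
# donor-economy series in the remaining columns. Names are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def hcw_treatment_effect(df: pd.DataFrame, outcome: str, treatment_start) -> float:
    pre = df[df.index < treatment_start]     # pre-trade-war sample
    post = df[df.index >= treatment_start]   # post-trade-war sample
    donors = [c for c in df.columns if c != outcome]

    # Fit the treated series on the donor pool using pre-treatment data only.
    model = LinearRegression().fit(pre[donors], pre[outcome])

    # Predict the counterfactual path "as if the Trade War had not existed".
    counterfactual = model.predict(post[donors])

    # Average difference between actual and predicted values.
    return float(np.mean(post[outcome].to_numpy() - counterfactual))
```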

    Task-agnostic Exploration in Reinforcement Learning

    Efficient exploration is one of the main challenges in reinforcement learning (RL). Most existing sample-efficient algorithms assume the existence of a single reward function during exploration. In many practical scenarios, however, there is no single underlying reward function to guide the exploration, for instance, when an agent needs to learn many skills simultaneously, or when multiple conflicting objectives need to be balanced. To address these challenges, we propose the task-agnostic RL framework: in the exploration phase, the agent first collects trajectories by exploring the MDP without the guidance of a reward function. After exploration, it aims at finding near-optimal policies for $N$ tasks, given the collected trajectories augmented with sampled rewards for each task. We present an efficient task-agnostic RL algorithm, UCBZero, that finds $\epsilon$-optimal policies for $N$ arbitrary tasks after at most $\tilde O(\log(N) H^5 S A / \epsilon^2)$ exploration episodes. We also provide an $\Omega(\log(N) H^2 S A / \epsilon^2)$ lower bound, showing that the $\log$ dependency on $N$ is unavoidable. Furthermore, we provide an $N$-independent sample complexity bound for UCBZero in the statistically easier setting where the ground-truth reward functions are known.
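
    A simplified tabular sketch of this two-phase setup is shown below: a reward-free exploration phase driven by a count-based optimism bonus, followed by per-task planning on the empirical transition model with that task's reward. This is an illustration of the framework only, not the paper's UCBZero algorithm, and the environment interface (`env.reset`, `env.step`), the state/action/horizon sizes, and the reward arrays are assumed placeholders.

```python
# Two-phase, task-agnostic sketch (illustrative, not the paper's UCBZero):
# phase 1 explores without observing rewards, phase 2 plans for each task's reward.
import numpy as np

def explore(env, S, A, H, num_episodes):
    """Collect transition counts without using any reward signal."""
    counts = np.zeros((S, A, S))
    visits = np.ones((S, A))                     # avoids division by zero in the bonus
    for _ in range(num_episodes):
        s = env.reset()                          # assumed: returns an integer state
        for _ in range(H):
            bonus = 1.0 / np.sqrt(visits[s])     # optimism: favor rarely tried actions
            a = int(np.argmax(bonus))
            s_next, done = env.step(a)           # assumed interface; no reward is used
            counts[s, a, s_next] += 1
            visits[s, a] += 1
            s = s_next
            if done:
                break
    return counts

def plan(counts, reward, H):
    """Finite-horizon value iteration on the empirical model for one task's (S x A) reward."""
    S, A, _ = counts.shape
    P_hat = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = reward + P_hat @ V                   # (S, A) action values at step h
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy
```

    Given the counts from a single exploration phase, `plan` can be called once per task with that task's sampled reward, which is the sense in which one batch of exploration data is reused across the $N$ downstream tasks.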

    Axiomatic Interpretability for Multiclass Additive Models

    Generalized additive models (GAMs) are favored in many regression and binary classification problems because they are able to fit complex, nonlinear functions while still remaining interpretable. In the first part of this paper, we generalize a state-of-the-art GAM learning algorithm based on boosted trees to the multiclass setting, and show that this multiclass algorithm outperforms existing GAM learning algorithms and sometimes matches the performance of full-complexity models such as gradient boosted trees. In the second part, we turn our attention to the interpretability of GAMs in the multiclass setting. Surprisingly, the natural interpretability of GAMs breaks down when there are more than two classes, and naive interpretation of multiclass GAMs can lead to false conclusions. Inspired by binary GAMs, we identify two axioms that any additive model must satisfy in order not to be visually misleading. We then develop a technique called Additive Post-Processing for Interpretability (API), which provably transforms a pre-trained additive model to satisfy the interpretability axioms without sacrificing accuracy. The technique works not just on models trained with our learning algorithm, but on any multiclass additive model, including multiclass linear and logistic regression. We demonstrate the effectiveness of API on a 12-class infant mortality dataset.
    Comment: KDD 201
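
    The kind of invariance such post-processing relies on can be seen directly: in a multiclass additive model, adding the same offset to every class's shape function at a given feature value shifts all logits equally, so the softmax probabilities are unchanged. The sketch below checks this with mean-centering across classes; it illustrates the invariance only, is not the full API procedure, and its array shapes and names are assumptions.

```python
# Softmax invariance behind additive post-processing (illustrative sketch).
# shape_values[i, j, k] holds the shape-function value f_{j,k}(x_{ij}) for
# sample i, feature j, class k.
import numpy as np

def predict_proba(shape_values):
    logits = shape_values.sum(axis=1)              # additive model: sum over features
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

def center_across_classes(shape_values):
    """Subtract, per sample and feature, the mean over classes (a class-independent shift)."""
    return shape_values - shape_values.mean(axis=2, keepdims=True)

# Sanity check: predictions are identical before and after centering.
rng = np.random.default_rng(0)
f = rng.normal(size=(5, 3, 4))                     # 5 samples, 3 features, 4 classes
assert np.allclose(predict_proba(f), predict_proba(center_across_classes(f)))
```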

    Federated Multi-Level Optimization over Decentralized Networks

    Multi-level optimization has gained increasing attention in recent years, as it provides a powerful framework for solving complex optimization problems that arise in many fields, such as meta-learning, multi-player games, reinforcement learning, and nested composition optimization. In this paper, we study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors. This setting is motivated by the need for distributed optimization in large-scale systems, where centralized optimization may not be practical or feasible. To address this problem, we propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale and to share information through network propagation. Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications, including hyper-parameter tuning, decentralized reinforcement learning, and risk-averse optimization.
    Comment: arXiv admin note: substantial text overlap with arXiv:2206.1087
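
    For readers unfamiliar with the gossip primitive referenced above, the sketch below shows its basic form: each agent mixes its iterate with those of its immediate neighbors through a doubly stochastic matrix and then takes a local gradient step. The mixing matrix, gradients, and step size are placeholders; this is the standard building block, not the paper's full single-timescale multi-level algorithm.

```python
# One gossip step over a decentralized network (illustrative placeholder values).
import numpy as np

def gossip_step(X, W, local_grads, step_size):
    """
    X           : (n_agents, dim) current iterates, one row per agent
    W           : (n_agents, n_agents) doubly stochastic mixing matrix;
                  W[i, j] > 0 only if agents i and j are neighbors
    local_grads : (n_agents, dim) gradient each agent computes on its own data
    """
    mixed = W @ X                          # communication with immediate neighbors only
    return mixed - step_size * local_grads

# Example on a 3-agent ring: each agent averages itself with its two neighbors.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
X = np.random.randn(3, 2)
grads = np.zeros((3, 2))                   # zero gradients: the step reduces to consensus averaging
X_next = gossip_step(X, W, grads, step_size=0.1)
```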